
    Sheep and goats: manipulating visual perception through colour relationships

    Sheep and Goats hides visual messages in plain sight. It is a print diptych that investigates the idea that artwork can be intentionally created to be experienced differently depending on one's visual abilities. Each silk-screened/ink-jet print is 84 cm x 112 cm, and the work is accompanied by a smart device fitted with augmented-reality colour vision deficiency (CVD) simulation and recolouring software. The collaboration of artist David Lyons with computer scientist David Flatla resulted in prints that communicate unique details exclusively to those with colour blindness, while simultaneously containing imagery that those with typical colour vision experience. This was achieved through the use and understanding of colour theory, artistic principles, and computer science applications. All the artwork is revealed to both audiences through tablets whose software translates the imagery between the two audiences: the CVD simulation and recolouring software allows those with typical colour sight to view what those with colour blindness see, and those with colour blindness to gain an appreciation of what individuals with typical sight see. To indicate engagement of audiences of varied colour vision abilities, Triple Blind references the circles of the Ishihara Colour Blind Test. The dualistic words 'heaven' and 'HELL' are used to suggest conflicting perceptions, as are the clear-varnish over-printed lyrics from the song 'Sheep Go to Heaven' by the rock band Cake. This paper documents the development of the work, its theoretical underpinnings, and its artistic, social, and philosophical implications.
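    CVD simulation software of the kind described above commonly works by multiplying each pixel's colour by a 3x3 simulation matrix. A minimal sketch, assuming a Machado-et-al.-style deuteranopia matrix (the coefficients below are the commonly quoted severity-1.0 values, included here for illustration only, not taken from this work's software):

    ```python
    import numpy as np

    # Deuteranopia simulation matrix (Machado et al.-style, severity 1.0).
    # These coefficients are assumed for illustration; a real simulator would
    # select them per CVD type and severity.
    DEUTERANOPIA = np.array([
        [ 0.367322, 0.860646, -0.227968],
        [ 0.280085, 0.672501,  0.047413],
        [-0.011820, 0.042940,  0.968881],
    ])

    def simulate_cvd(rgb, matrix=DEUTERANOPIA):
        """Apply a CVD simulation matrix to a linear-RGB triple in [0, 1]."""
        rgb = np.asarray(rgb, dtype=float)
        return np.clip(rgb @ matrix.T, 0.0, 1.0)

    # A saturated red collapses toward a dimmer yellow-brown under simulation,
    # illustrating why red/green distinctions are lost.
    print(simulate_cvd([1.0, 0.0, 0.0]))
    ```

    Recolouring software runs the inverse concern: it remaps the original palette so that colours which this transform would merge remain distinguishable.
    
    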

    A predictive model of colour differentiation

    The ability to differentiate between colours varies from individual to individual. This variation is attributed to factors such as the presence of colour blindness. Colour is used to encode information in information visualizations. An example of such an encoding is categorization using colour (e.g., green for land, blue for water). As a result of the variation in colour differentiation ability among individuals, many people experience difficulties when using colour-encoded information visualizations. These difficulties result from the inability to adequately differentiate between two colours, resulting in confusion, errors, frustration, and dissatisfaction. If a user-specific model of colour differentiation were available, these difficulties could be predicted and corrected. Prediction and correction of these difficulties would reduce the amount of confusion, errors, frustration, and dissatisfaction experienced by users. This thesis presents a model of colour differentiation that is tuned to the abilities of a particular user. To construct this model, a series of judgement tasks are performed by the user. The data from these judgement tasks is used to calibrate a general colour differentiation model to the user. This calibrated model is used to construct a predictor, which can then be used to make predictions about the user's ability to differentiate between two colours. Two participant-based studies were used to evaluate this solution. The first study evaluated the basic approach used to model colour differentiation. The second study evaluated the accuracy of the predictor by comparing its performance to the performance of human participants. It was found that the predictor was as accurate as the human participants 86.3% of the time. Using such a predictor, the colour differentiation abilities of particular users can be accurately modelled.
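    One simple way the calibrate-then-predict pipeline described in this abstract could be realised is a per-user distance threshold fitted to judgement-task data. The Euclidean RGB distance, the exhaustive threshold search, and the data format below are all assumptions for illustration, not the thesis's actual model:

    ```python
    import numpy as np

    def colour_distance(c1, c2):
        """Euclidean distance between two RGB triples (a stand-in for the
        perceptual distance a real differentiation model would use)."""
        return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))

    def calibrate_threshold(judgements):
        """Fit a per-user distance threshold from judgement-task data.

        judgements: list of ((c1, c2), differentiable) pairs, where
        differentiable is True if the user could tell the pair apart.
        Returns the candidate threshold with the best training accuracy.
        """
        scored = sorted((colour_distance(a, b), ok) for (a, b), ok in judgements)
        best_t, best_acc = 0.0, -1.0
        for t in [0.0] + [d for d, _ in scored]:
            acc = sum((d >= t) == ok for d, ok in scored) / len(scored)
            if acc > best_acc:
                best_t, best_acc = t, acc
        return best_t

    def predict(c1, c2, threshold):
        """Predict whether the calibrated user can differentiate c1 and c2."""
        return colour_distance(c1, c2) >= threshold

    # Hypothetical calibration data for one user with red/green confusion.
    data = [(((1, 0, 0), (0, 1, 0)), False),   # red vs. green: confused
            (((0, 0, 1), (1, 1, 0)), True)]    # blue vs. yellow: distinguished
    t = calibrate_threshold(data)
    print(predict((1, 0, 0), (0, 1, 0), t))  # this user confuses red and green
    ```

    A recolouring tool could then query `predict` before assigning two colours to adjacent categories.
    
    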

    Eye for an eye: an exploration of visual abilities and perception

    A selection of print works, motion graphics, and interactive smart devices created by artist David Lyons and computer scientist David Flatla, exploring visual abilities and perception.

    Individualized Models of Colour Differentiation through Situation-Specific Modelling

    In digital environments, colour is used for many purposes: for example, to encode information in charts, signify missing field information on websites, and identify active windows and menus. However, many people have inherited, acquired, or situationally-induced Colour Vision Deficiency (CVD), and therefore have difficulties differentiating many colours. Recolouring tools have been developed that modify interface colours to make them more differentiable for people with CVD, but these tools rely on models of colour differentiation that do not represent the majority of people with CVD. As a result, existing recolouring tools do not help most people with CVD. To solve this problem, I developed Situation-Specific Modelling (SSM), and applied it to colour differentiation to develop the Individualized model of Colour Differentiation (ICD). SSM utilizes an in-situ calibration procedure to measure a particular user's abilities within a particular situation, and a modelling component to extend the calibration measurements into a full representation of the user's abilities. ICD applies in-situ calibration to measuring a user's unique colour differentiation abilities, and contains a modelling component that is capable of representing the colour differentiation abilities of almost any individual with CVD. This dissertation presents four versions of the ICD and one application of the ICD to recolouring. First, I describe the development and evaluation of a feasibility implementation of the ICD that tests the viability of the SSM approach. Second, I present revised calibration and modelling components of the ICD that reduce the calibration time from 32 minutes to two minutes. Next, I describe the third and fourth ICD versions that improve the applicability of the ICD to recolouring tools by reducing the colour differentiation prediction time and increasing the power of each prediction. Finally, I present a new recolouring tool (ICDRecolour) that uses the ICD model to steer the recolouring process. In a comparative evaluation, ICDRecolour achieved 90% colour-matching accuracy for participants (20% better than existing recolouring tools) across a wide range of CVDs. By modelling the colour differentiation abilities of a particular user in a particular environment, the ICD enables the extension of recolouring tools to help most people with CVD, thereby reducing the difficulties that people with CVD experience when using colour in digital environments.
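    The idea of a model-steered recolouring process can be sketched as a loop that keeps each palette colour when the user's differentiation model accepts it against the colours already placed, and otherwise substitutes a safe candidate. The greedy scheme, the predicate interface, and the candidate pool below are assumed simplifications; the published ICDRecolour algorithm is more sophisticated:

    ```python
    import itertools
    import math

    def recolour(palette, differentiable, candidates):
        """Greedy recolouring sketch: keep each palette colour if the user can
        tell it apart from every colour already placed; otherwise substitute
        the first candidate that passes. `differentiable(c1, c2)` stands in
        for an ICD-style per-user predicate (an assumption here)."""
        result = []
        for colour in palette:
            for option in itertools.chain([colour], candidates):
                if all(differentiable(option, placed) for placed in result):
                    result.append(option)
                    break
            else:
                result.append(colour)  # no safe option found; keep original
        return result

    # Demo with a hypothetical threshold-based predicate: the second,
    # nearly identical red is swapped for the blue candidate.
    can_tell = lambda a, b: math.dist(a, b) >= 0.5
    print(recolour([(1, 0, 0), (0.9, 0, 0)], can_tell, [(0, 0, 1)]))
    ```

    Because the predicate is per-user, the same loop adapts its output to whatever abilities the calibration measured, which is the property the abstract attributes to model-steered recolouring.
    
    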

    MirrorMirror: A Mobile Application to Improve Speechreading Acquisition


    Designing game-based myoelectric prosthesis training

    A myoelectric prosthesis (myo) is a dexterous artificial limb controlled by muscle contractions. Learning to use a myo can be challenging, so extensive training is often required to use a myo prosthesis effectively. Signal visualizations and simple muscle-controlled games are currently used to help patients train their muscles, but are boring and frustrating. Furthermore, current training systems require expensive medical equipment and clinician oversight, restricting training to infrequent clinical visits. To address these limitations, we developed a new game that promotes fun and success, and shows the viability of a low-cost myoelectric input device. We adapted a user-centered design (UCD) process to receive feedback from patients, clinicians, and family members as we iteratively addressed challenges to improve our game. Through this work, we introduce a free and open myo training game, provide new information about the design of myo training games, and reflect on an adapted UCD process for the practical iterative development of therapeutic games.

    Gaze-contingent manipulation of color perception

    Using real-time eye tracking, gaze-contingent displays can modify their content to represent depth (e.g., through additional depth cues) or to increase rendering performance (e.g., by omitting peripheral detail). However, there has been no research to date exploring how gaze-contingent displays can be leveraged for manipulating perceived color. To address this, we conducted two experiments (color matching and sorting) that manipulated peripheral background and object colors to influence the user's color perception. Findings from our color matching experiment suggest that we can use gaze-contingent simultaneous contrast to affect color appearance and that existing color appearance models might not fully predict perceived colors with gaze-contingent presentation. Through our color sorting experiment we demonstrate how gaze-contingent adjustments can be used to enhance color discrimination. Gaze-contingent color holds the promise of expanding the perceived color gamut of existing display technology and enabling people to discriminate color with greater precision.

    Beyond Accessibility: Lifting Perceptual Limitations for Everyone

    We propose that accessibility research can lay the foundation for technology that can be used to augment the perception of everyone. To show how this can be achieved, we present three case studies of our research in which we demonstrate our approaches for impaired colour vision, situational visual impairments, and situational hearing impairment.